Intonation plays an important role in delivering the intention of a speaker. However, current end-to-end TTS systems often fail to model proper intonation. To alleviate this problem, we propose a novel, intuitive method of synthesizing speech in different intonations using predefined intonation templates. Prior to TTS model training, speech data are grouped into intonation templates in an unsupervised manner. Two proposed modules are added to the end-to-end TTS framework: an intonation predictor and an intonation encoder. The intonation predictor recommends a suitable intonation template for the given text. The intonation encoder, attached to the text encoder output, synthesizes speech abiding by the requested intonation template. The main contributions of our paper are: (a) an easy-to-use intonation control system covering a wide range of users; (b) better performance in wrapping speech in a requested intonation, with improved objective and subjective evaluation; and (c) incorporation of a pre-trained language model for intonation modelling. Audio samples are available at https://srtts.github.io/IntoTTS.
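The abstract does not say how speech data are grouped into intonation templates; a common unsupervised choice is k-means clustering of per-utterance pitch contours. The sketch below is purely illustrative — the function name, the use of k-means, and the contour representation are all our assumptions, not the paper's method.

```python
import numpy as np

def kmeans_templates(contours, k=4, iters=50, seed=0):
    # contours: (N, T) array of length-normalized pitch contours, one per utterance.
    # Returns a template id per utterance and the k template centroids.
    rng = np.random.default_rng(seed)
    centers = contours[rng.choice(len(contours), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each contour to its nearest centroid (squared Euclidean distance)
        labels = np.argmin(((contours[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each non-empty centroid to the mean of its assigned contours
        for j in range(k):
            if (labels == j).any():
                centers[j] = contours[labels == j].mean(axis=0)
    return labels, centers
```

Each resulting cluster id would then serve as the "intonation template" label consumed by the predictor and encoder.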
Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data, which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contain image-text pairs that are aligned at different levels, the inherent noise (e.g., misaligned pairs) makes it difficult to learn a precise captioning model. While a filtering strategy can effectively remove noisy data, it reduces the learnable knowledge and sometimes brings about a new problem of data deficiency. To get the best of both worlds, we propose a noise-aware learning framework that learns rich knowledge from the entire web-crawled dataset while being less affected by the noise. This is achieved by the proposed quality-controllable model, which is trained using the alignment levels of the image-text pairs as an additional control signal. The alignment-conditioned training allows the model to generate high-quality, well-aligned captions simply by setting the control signal to the desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks of zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we also demonstrate that our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}.
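The abstract leaves open how continuous alignment scores become a discrete control signal; one plausible scheme is to bucketize an image-text similarity score into a small set of control tokens. The sketch below is our own illustration — the thresholds, bucket count, and function name are assumptions, not details from the paper.

```python
import numpy as np

def alignment_bucket(sim, n_buckets=3, lo=0.1, hi=0.4):
    # Map an image-text similarity score to a discrete control-signal bucket:
    # 0 = noisy / misaligned ... n_buckets - 1 = well aligned.
    edges = np.linspace(lo, hi, n_buckets - 1)   # interior bucket boundaries
    return int(np.digitize(sim, edges))
```

At inference time, requesting the highest bucket would correspond to setting the control signal to the best alignment level.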
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
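For readers unfamiliar with the k-fold protocol the survey asks about, a generic sketch follows; this is a minimal illustration of the practice, not any participant's code, and the function name is ours.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    # Shuffle sample indices and split them into k roughly equal folds;
    # each fold serves as the validation set exactly once.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Ensembling, as reported by half the respondents, would then typically average the predictions of the k models trained on these splits.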
We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images by using only image-text pairs without dense annotations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learning (CL) to learn diverse visual concepts and adapting the learned image-level understanding to the segmentation task. However, these CL-based methods suffer from a discrepancy, since CL considers only image-text-level alignment at training time, while the segmentation task requires region-text-level alignment at test time. In this paper, we propose a novel Text-grounded Contrastive Learning (TCL) framework that directly aligns a text and a region described by the text to address the train-test discrepancy. Our method generates a segmentation mask associated with a given text, extracts a grounded image embedding from the masked region, and aligns it with the text embedding via TCL. The framework addresses the discrepancy by letting the model learn region-text-level alignment instead of image-text-level alignment, and encourages the model to directly improve the quality of the generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation protocol with 8 widely used semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation performance by large margins on all datasets. Code is available at https://github.com/kakaobrain/tcl.
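The grounded-embedding-and-align step can be sketched as mask-weighted pooling of patch embeddings followed by an InfoNCE-style contrastive loss over a batch. This is a minimal numpy illustration under our own assumptions (function names, pooling scheme, and the exact loss form are not specified by the abstract).

```python
import numpy as np

def grounded_embedding(patch_emb, mask):
    # patch_emb: (P, D) patch embeddings; mask: (P,) soft segmentation mask in [0, 1].
    w = mask / (mask.sum() + 1e-8)   # normalize the mask into pooling weights
    return w @ patch_emb             # mask-weighted average -> (D,) region embedding

def tcl_loss(patch_embs, masks, text_embs, tau=0.07):
    # patch_embs: (B, P, D), masks: (B, P), text_embs: (B, D).
    g = np.stack([grounded_embedding(p, m) for p, m in zip(patch_embs, masks)])
    g /= np.linalg.norm(g, axis=1, keepdims=True)          # unit-normalize regions
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = g @ t.T / tau                                  # (B, B) region-text similarity
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(g))
    return -logp[idx, idx].mean()   # InfoNCE: matched pairs lie on the diagonal
```

Because the mask enters the pooling differentiably, gradients from the alignment loss would also sharpen the mask itself, which matches the abstract's claim that mask quality is improved directly.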
Autonomous navigation in crowded spaces poses a challenge for mobile robots due to the highly dynamic, partially observable environment. Occlusions are highly prevalent in such settings due to a limited sensor field of view and obstructing human agents. Previous work has shown that observed interactive behaviors of human agents can be used to estimate potential obstacles despite occlusions. We propose integrating such social inference techniques into the planning pipeline. We use a variational autoencoder with a specially designed loss function to learn representations that are meaningful for occlusion inference. This work adopts a deep reinforcement learning approach to incorporate the learned representation for occlusion-aware planning. In simulation, our occlusion-aware policy achieves comparable collision avoidance performance to fully observable navigation by estimating agents in occluded spaces. We demonstrate successful policy transfer from simulation to the real-world Turtlebot 2i. To the best of our knowledge, this work is the first to use social occlusion inference for crowd navigation.
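The abstract mentions a variational autoencoder with a specially designed loss but does not give it; for orientation, the standard VAE objective (reconstruction plus KL regularization) is sketched below. The β weighting and the MSE reconstruction term are our assumptions — the paper's actual loss is stated to be customized for occlusion inference.

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    # Reconstruction term: how well the decoded occupancy matches the observation.
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence of q(z|x) = N(mu, diag(exp(logvar))) from the unit Gaussian prior.
    kl = -0.5 * np.mean(1 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

The learned latent z would then be fed to the reinforcement-learning policy as the occlusion-aware state representation.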
When humans interact with robots, they are inevitably influenced by them. Consider an autonomous car driving near a human: the speed and steering of the autonomous car will affect how the human drives. Prior works have developed frameworks that enable robots to influence humans toward desired behaviors. But while these approaches are effective in the short term (i.e., the first few human-robot interactions), here we explore long-term influence (i.e., repeated interactions between the same human and robot). Our central insight is that humans are dynamic: people adapt to robots, and behaviors that are influential now may fail once the human learns to anticipate the robot's actions. With this insight, we experimentally demonstrate that a prevalent game-theoretic formalism for generating influential robot behavior becomes less effective over repeated interactions. Next, we propose three modifications to Stackelberg games that make the robot's policy both influential and unpredictable. We finally test these modifications in simulations and user studies: our results suggest that robots which intentionally make their behavior harder to anticipate are better able to maintain influence over long-term interaction. See videos here: https://youtu.be/ydo83cgjz2q
For the economical deployment of robot manipulators, the programming and execution of robot motions must be swift. To this end, we propose a novel, constraint-based method to intuitively specify sequential manipulation tasks and to compute time-optimal robot motions for such a task specification. Our approach follows the ideas of constraint-based task specification, aiming at a minimal and object-centric task description that is largely independent of the underlying robot kinematics. We transform this task description into a nonlinear optimization problem. By solving this problem, we obtain (locally) time-optimal robot motions, not only for a single motion but for an entire manipulation sequence. We demonstrate the capabilities of our approach in a series of experiments involving five distinct robot models, including a highly redundant mobile manipulator.
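The shape of such a nonlinear program can be illustrated with a deliberately tiny toy: minimize total motion time over segment durations subject to a velocity limit. All numbers, names, and the single constraint are hypothetical; the paper's actual formulation covers full kinematics and an entire manipulation sequence.

```python
import numpy as np
from scipy.optimize import minimize

d = np.array([0.3, 0.5, 0.2])   # hypothetical path lengths of three motion segments [m]
v_max = 0.4                     # hypothetical average-velocity limit [m/s]

res = minimize(
    lambda t: t.sum(),                                  # objective: total motion time
    x0=np.full(3, 2.0),                                 # conservative initial durations
    constraints=[{"type": "ineq",                       # enforce d_i / t_i <= v_max
                  "fun": lambda t: v_max - d / t}],
    bounds=[(1e-3, None)] * 3,                          # durations must stay positive
)
t_opt = res.x   # at the optimum, each segment runs at the velocity limit: t_i = d_i / v_max
```

The solver pushes every segment to its constraint boundary, which is the one-dimensional analogue of a time-optimal trajectory riding the robot's limits.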
Robots need multiple interaction modes to robustly collaborate with humans in complicated industrial tasks. We develop a Coexistence-and-Cooperation (CoCo) human-robot collaboration system. The coexistence mode enables the robot to work on different subtasks independently alongside the human in a shared space. The cooperation mode enables the robot to follow human guidance and recover from failures. A human intention tracking algorithm takes human and robot motion measurements as input and provides the switch between interaction modes. We demonstrate the effectiveness of the CoCo system in a use case analogous to a real-world multi-step assembly task.
Most state-of-the-art speaker verification architectures adopt multi-scale processing and frequency-channel attention mechanisms. The convolutional layers of these models typically have fixed kernel sizes, e.g., 3 or 5. In this study, we further contribute to this line of research by adopting a selective kernel attention (SKA) mechanism. The SKA mechanism allows each convolutional layer to adaptively select its kernel size in a data-driven fashion. It is based on an attention mechanism that exploits both the frequency and channel domains. We first apply existing SKA modules to our baseline. Then we propose two SKA variants, where the first variant is applied in front of the ECAPA-TDNN model and the other is combined with the Res2Net backbone block. Through extensive experiments, we demonstrate that our two proposed SKA variants consistently improve performance and are complementary when tested on three different evaluation protocols.
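The data-driven kernel selection at the heart of SKA can be sketched as follows: branches computed with different kernel sizes are fused by a softmax attention over branches, conditioned on globally pooled features. This is a simplified numpy illustration in the spirit of selective-kernel networks; the projection shape, pooling, and function name are our assumptions, not the paper's exact module.

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)   # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def selective_kernel_fuse(branches, W):
    # branches: list of K feature maps (C, T), each from a conv with a different kernel size.
    # W: (K * C, C) projection producing per-branch, per-channel selection logits.
    u = sum(branches)                                  # fuse branches by summation
    s = u.mean(axis=1)                                 # global average pooling over time -> (C,)
    logits = (W @ s).reshape(len(branches), -1)        # (K, C) branch-selection logits
    a = softmax(logits, axis=0)                        # soft kernel selection per channel
    return sum(a[i][:, None] * b for i, b in enumerate(branches))
```

Each channel thus learns its own soft mixture over kernel sizes rather than committing to a fixed 3 or 5.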
Collaborative robots require effective human intention estimation to collaborate with humans safely and smoothly in structured tasks such as industrial assembly, where human intentions continuously change. We propose the concept of intention tracking and introduce a collaborative robot system that concurrently tracks intentions at hierarchical levels. The high-level intention is tracked to estimate the human's interaction pattern and enables the robot to either (1) avoid collision with the human to minimize interruption or (2) assist the human in correcting a failure. The low-level intention estimation provides the robot with task-specific information for concurrent execution. We implement the system on a UR5e robot and demonstrate robust, seamless, and ergonomic human-robot collaboration in an assembly use case through an ablative pilot study.
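Tracking a discrete intention over time is commonly done with a recursive Bayes filter; a generic sketch is shown below. The abstract does not specify the estimator, so treat this as an assumed illustration — transition matrix, likelihood model, and function name are ours.

```python
import numpy as np

def update_intention(belief, T, likelihood):
    # belief: (N,) current probability over N discrete intentions.
    # T: (N, N) intention transition matrix, T[i, j] = P(next=j | current=i).
    # likelihood: (N,) P(observed motion | intention), from a motion model.
    pred = T.T @ belief            # predict: propagate belief through the dynamics
    post = pred * likelihood       # correct: weight by the observation likelihood
    return post / post.sum()       # renormalize to a probability distribution
```

Run once per motion measurement, this yields the evolving intention estimate that drives the robot's choice between collision avoidance and failure assistance.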